
TL/CUDA: Linear Broadcast for GPU #948

Open
wants to merge 35 commits into base: master
Conversation

ikryukov
Contributor

@ikryukov ikryukov commented Mar 22, 2024

What

Linear CUDA Broadcast implementation.

Why?

  • Functional improvement: parity with other communication libraries.
  • Ability to place many ranks on a single GPU.
  • No GPU blocking: communication is initiated from the host.

How?

A naive approach: the root rank writes its data into its own shared buffer, and the other ranks read from it through NVLink.

@ikryukov
Contributor Author

ikryukov commented Mar 22, 2024

Configuration string:
./configure --with-ucx=$HPCX_UCX_DIR --with-cuda=/usr/local/cuda --with-mpi=$HPCX_MPI_DIR --enable-gtest --prefix=$PWD/install --with-nvcc-gencode="-gencode=arch=compute_80,code=sm_80" --enable-debug
Run string:
mpirun --mca coll ^hcoll --mca coll_ucc_enable 0 -x LD_LIBRARY_PATH=/home/ikryukov/work/ucc/install/lib:$LD_LIBRARY_PATH -x UCC_CLS=basic -x UCC_TLS=ucp,cuda -x xUCC_LOG_LEVEL=info -x UCC_TL_CUDA_LOG_LEVEL=debug -x UCC_LOG_LEVEL=info -x UCC_CONFIG_FILE= -np 2 ./install/bin/ucc_test_mpi -c bcast --teams world -M cuda -O 0 -S 2

@ikryukov ikryukov marked this pull request as draft March 22, 2024 14:32
@swx-jenkins3

Can one of the admins verify this patch?

Collaborator

@samnordmann samnordmann left a comment


Looks good to me! Thanks!
I only left some minor remarks. Can you, in addition, add this algo to the tests?

Review threads on src/components/tl/cuda/tl_cuda.h and src/components/tl/cuda/bcast/bcast_linear.c (resolved)
            return;
        }
    } else {
        ucc_debug("etask is nullptr");
Collaborator

Isn't this case an infinite loop? I am not sure I understand.

Contributor Author

It is an error case, used for debugging; it should not happen in real situations.

Collaborator

Ok, why not use ucc_assert here then?

Review thread on src/components/tl/cuda/bcast/bcast_linear.c (resolved)
@ikryukov
Contributor Author

Looks good to me! Thanks! I only left some minor remarks. Can you, in addition, add this algo to the tests?

Thanks for the review; addressed the comments and added a test to validate bcast for CUDA too.

Review threads on src/components/tl/cuda/bcast/bcast_linear.c and src/components/tl/cuda/tl_cuda.h (resolved)
@samnordmann samnordmann self-requested a review September 9, 2024 08:16
Collaborator

@janjust janjust left a comment


looks good

@manjugv
Contributor

manjugv commented Sep 18, 2024

ping @Sergei-Lebedev

6 participants